Search Results: "jas"

23 February 2017

Joerg Jaspert: Automated wifi login

If you have the fortune to need to follow some silly Login button for some wifi, regularly, the following little script may help you avoid this idiotic (and useless) task. This example uses the WIFIonICE, the free wifi on german ICE trains, simply as I have it twice a day, and got annoyed by the pointless Login button. A friend pointed me at just wget-ting the login page, so I made NetworkManager do this for me. Should work for anything similar that doesn't need some elaborate webform filled out.
#!/bin/bash
# (Some) docs at
# https://wiki.ubuntuusers.de/NetworkManager/Dispatcher/
IFACE=${1:-"none"}
ACTION=${2:-"up"}
case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwconfig $IFACE | grep ESSID | cut -d":" -f2 | sed 's/^[^"]*"\|"[^"]*$//g')}
        if [[ ${CONID} == WIFIonICE ]]; then
            /usr/bin/timeout -k 20 15 /usr/bin/wget -q -O - http://www.wifionice.de/?login > /dev/null
        fi
        ;;
    *)
        # We are not interested in this
        :
        ;;
esac
This script needs to be put into /etc/NetworkManager/dispatcher.d and made executable, owned by the root user. It will run on every connection change; that's why the ACTION is checked. The case may be a bit much here, but it could easily be extended to do a lot more. Yay, no more silly "Open this webpage and press Login" crap.
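For example (a sketch; the source file name wifionice.sh and the target name 90wifionice are just illustrative choices), installing it with the right ownership and permissions could look like this:
    # Install the dispatcher script owned by root and executable
    sudo install -m 755 -o root -g root wifionice.sh \
        /etc/NetworkManager/dispatcher.d/90wifionice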

21 February 2017

Shirish Agarwal: The Indian elections hungama

Before I start, I would like to point out #855549. This is a normal/wishlist bug I have filed against apt, the command-line package manager. I sincerely believe that having a history command to know which packages were installed, which were upgraded, and which were purged should be easily accessible and easily understood, and if the output looks pretty, so much the better. Of particular interest to me is having a list of the new packages I have installed in the last couple of years, after jessie became the stable release. It would probably make for some interesting reading. I don't know how much effort it would take to code something like that, but if it works, it would be the greatest. Apt would have finally arrived. Not that it's a bad tool, it's just that it would then make for a heck of a useful tool. Coming back to the topic at hand: for the last couple of weeks we haven't had water, or rather water pressure. A water crisis has been hitting Pune every year since 2014 with no end in sight. This has been reported in newspapers, but it seems to have fallen on deaf ears. The end result is that I have to bring buckets of water from around 50-odd metres away. It's not a big thing; it's not like some women in villages in Rajasthan who have to walk between 200 metres and 5-odd kilometres to get potable water, or Darfur, Western Sudan, where women are often kidnapped and sold as sexual slaves when they go to fetch water. The situation in Darfur has been shown quite vividly in Darfur is Dying. It is possible that I have mentioned Darfur before. While the game is unfortunately a Flash web resource, the most disturbing part is that it is extremely depressing: there is a no-win scenario. So knowing and seeing both those scenarios, I can't complain about 50 metres. BUT... but when you extrapolate the same data over roughly 3.3-3.4 million citizens (3.1 million during the 2011 census, with a conservative 2.3-2.4 percent population growth rate according to scroll.in), it adds up. Fortunately or unfortunately, Pune Municipal Corporation elections were held today. Fortunately or unfortunately, this time all the political parties brought largely unknown faces into these elections. For example, I belong to ward 14, which is spread over quite a bit of area and has around 10k registered voters. Now, the unfortunate part of having new faces in elections is that you don't know anything about them. Apart from the affidavits filed, the only things I come to know are whether there are criminal cases filed against them and what they have declared as their wealth. I am, and should be, thankful to ADR, which is the force behind having this collated data made public. There is a lot of untold story about political push-back by all the major national and regional political parties even when this bit of news was to be made public; it took the better part of a decade for such information to come into the public domain. But my goal of getting clean air and a 24x7 water supply to each household seems a very distant dream. I tried to connect with the corporators about a week before the contest, and almost all of the lower party functionaries hid behind their parties' manifestos, stating they would do their best, without any viable plan. For those not in the know, India has been blessed with 6-odd national parties and about 36-odd regional parties, and each election some 20-25 new parties try their luck. The problem is that we, the public, don't trust them or their manifestos.
First of all, the political parties themselves engage in mud-slinging as to who's copying whom with the manifesto. Even if a political party wins the elections, there is no *real* pressure for it to follow its own manifesto. This has been going on for many a year. Of course, we citizens are also to blame, as most citizens, for one reason or another, choose to remain aloof from the process. I scanned/leafed through all the manifestos, and all of them have the vague wording "we will make Pune tanker-free" without any implementation details. While I was unable to meet the soon-to-be corporators, I did manage to meet a few of the assistants, but all the meetings were entirely fruitless. I asked why the city can't follow the Chennai model. Chennai, not so long ago, was in the same place where Pune is now, especially in relation to water. What happened next, in 2001, has been beautifully chronicled in the Hindustan Times. What has not been shared in that story is that the idea was actually fielded by one of the Chennai Mayor's assistants, an IAS officer whose name I have forgotten. Thankfully, her advice/idea was taken to heart by the political establishment, and they drove RWH (rain water harvesting). When I asked why we can't do something similar in Pune, I heard all kinds of excuses. The worst and most used was "Marathas can never unite", which I think is pure bullshit. For people unfamiliar with the term, the Marathas were a warrior clan in Shivaji's army. Shivaji, the king of the Marathas, was an expert tactician and master of guerilla warfare. It is due to the valor of the Marathas that we still have the Maratha Light Infantry, a proud member of the Indian Army. Why do I say bullshit? The composition of people living in Maharashtra has changed over the decades. While at one time both the Brahmins and the Marathas had considerable political and population numbers, that has changed drastically. Maharashtra, and more pointedly Mumbai, Pune, and Nagpur, have become immigrant centres. Just a decade back, Shiv Sena, an ultra right-wing political party, used to play the Maratha card at each and every election and heckle people coming from Uttar Pradesh and Bihar; this has been documented as the 2008 attacks on immigrants, and 9 years later we see Shiv Sena trying to field its candidates in Uttar Pradesh. So, obviously, they cannot use the same tactics that they could at one point in time. One more reason I call it bullshit: it's a very lame excuse. When the Prime Minister of the country calls for demonetization, which affects 1.25 billion people, people die, people stand in queues, and it remains largely peaceful, I do not see people resisting if they bring a good scheme. I almost forgot: as an added sweetener, the Chennai municipality said that if you do RWH and show photos and certificates of the job, you won't have to pay as much property tax as you otherwise would; that also boosted people's participation. And that is not the only solution; one more has been outlined in "Aaj Bhi Khade Hain Talaab", written by the recently deceased Gandhian environmental activist Anupam Mishra. His book can be downloaded for free at the India Water Portal. Unfortunately, the book doesn't have a good English translation to date. Interestingly, all of his content is licensed into the public domain (CC-0), so people can continue to enjoy and learn from his life's work. Another lesson or understanding could be taken from Israel, the father of modern micro-drip irrigation for crops.
One of the things on my bucket list is to visit Israel and, if possible, learn how they went from a water-deficient country to a water-surplus one. Which brings me to my second conundrum: most people believe that it's the Government's job to provide jobs to its people. India has been experiencing jobless growth for around a decade now, since the 2008 meltdown. While India was lucky to escape the meltdown itself, most of its trading partners weren't, which slowed down international trade, which in turn slowed down the creation of new enterprises. New laws such as the Bankruptcy Law and the upcoming Goods and Services Tax are being introduced; like everybody else, I am a bit excited and a bit apprehensive about how the actual implementation will take place. Even international businesses have been found wanting. The latest examples have been Uber and Ola: there have been protests against the two cab/taxi aggregators operating in India. For the millions of jobless students coming out of schools and universities, there simply aren't enough jobs, nor are most (okay, 50%) of them qualified for the jobs that exist, and these 50 percent are also untrainable, so what to do? In reality, this is what keeps me awake at night. India is sitting on this ticking bombshell. It is really a miracle that the youth have not rebelled yet. While all the conditions, proposals, and counter-proposals have been shared before, I wanted/needed to highlight them. While the issues seem to be local, I would assert that they are all glocal in nature. The questions we are facing have, I'm sure, affected both developing and, to some extent, even developed countries. I look forward to learning what I can from them. Update 23/02/17: I had wanted to share a bit about Debian's voting system, but that got derailed. Hence, in order not to digress, I'll just point towards the 2015 platforms where 3 people vied for the DPL post. I *think* I shared about the DPL voting process earlier, but if not, I will do so in detail in some future blog post.
Filed under: Miscellaneous Tagged: #Anupam Mishra, #Bankruptcy law, #Chennai model, #clean air, #clean water, #elections, #GST, #immigrant, #immigrants, #Maratha, #Maratha Light Infantry, #migration, #national parties, #Political party manifesto, #regional parties, #ride-sharing, #water availability, Rain Water Harvesting

8 February 2017

Antoine Beaupré: Reliably generating good passwords

Passwords are used everywhere in our modern life. Between your email account and your bank card, a lot of critical security infrastructure relies on "something you know", a password. Yet there is little standard documentation on how to generate good passwords. There are some interesting possibilities for doing so; this article will look at what makes a good password and some tools that can be used to generate them. There is growing concern that our dependence on passwords poses a fundamental security flaw. For example, passwords rely on humans, who can be coerced to reveal secret information. Furthermore, passwords are "replayable": if your password is revealed or stolen, anyone can impersonate you to get access to your most critical assets. Therefore, major organizations are trying to move away from single password authentication. Google, for example, is enforcing two factor authentication for its employees and is considering abandoning passwords on phones as well, although we have yet to see that controversial change implemented. Yet passwords are still here and are likely to stick around for a long time until we figure out a better alternative. Note that in this article I use the word "password" instead of "PIN" or "passphrase", which all roughly mean the same thing: a small piece of text that users provide to prove their identity.

What makes a good password? A "good password" may mean different things to different people. I will assert that a good password has the following properties:
  • high entropy: hard to guess for machines
  • transferable: easy to communicate for humans or transfer across various protocols for computers
  • memorable: easy to remember for humans
High entropy means that the password should be unpredictable to an attacker, for all practical purposes. It is tempting (and not uncommon) to choose a password based on something else that you know, but unfortunately those choices are likely to be guessable, no matter how "secret" you believe it is. Yes, with enough effort, an attacker can figure out your birthday, the name of your first lover, your mother's maiden name, where you were last summer, or other secrets people think they have. The only solution here is to use a password randomly generated with enough randomness or "entropy" that brute-forcing the password will be practically infeasible. Considering that a modern off-the-shelf graphics card can guess millions of passwords per second using freely available software like hashcat, the typical requirement of "8 characters" is not considered enough anymore. With proper hardware, a powerful rig can crack such passwords offline within about a day. Even though a recent US National Institute of Standards and Technology (NIST) draft still recommends a minimum of eight characters, we now more often hear recommendations of twelve characters or fourteen characters. A password should also be easily "transferable". Some characters, like & or !, have special meaning on the web or the shell and can wreak havoc when transferred. Certain software also has policies of refusing (or requiring!) some special characters exactly for that reason. Weird characters also make it harder for humans to communicate passwords across voice channels or different cultural backgrounds. In a more extreme example, the popular Signal software even resorted to using only digits to transfer key fingerprints. They outlined that numbers are "easy to localize" (as opposed to words, which are language-specific) and "visually distinct". But the critical piece is the "memorable" part: it is trivial to generate a random string of characters, but those passwords are hard for humans to remember. As xkcd noted, "through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember but easy for computers to guess". It explains how a series of words is a better password than a single word with some characters replaced. Obviously, you should not need to remember all passwords. Indeed, you may store some in password managers (which we'll look at in another article) or write them down in your wallet. In those cases, what you need is not a password, but something I would rather call a "token", or, as Debian Developer Daniel Kahn Gillmor (dkg) said in a private email, a "high entropy, compact, and transferable string". Certain APIs are specifically crafted to use tokens. OAuth, for example, generates "access tokens" that are random strings that give access to services. But in our discussion, we'll use the term "token" in a broader sense. Notice how we removed the "memorable" property and added the "compact" one: we want to efficiently convert the most entropy into the shortest password possible, to work around possibly limiting password policies. For example, some bank cards only allow 5-digit security PINs and most web sites have an upper limit on password length. The "compact" property applies less to "passwords" than tokens, because I assume that you will only use a password in select places: your password manager, SSH and OpenPGP keys, your computer login, and encryption keys. Everything else should be in a password manager.
Those tools are generally under your control and should allow large enough passwords that the compact property is not particularly important.

Generating secure passwords We'll look now at how to generate a strong, transferable, and memorable password. These are most likely the passwords you will deal with most of the time, as security tokens used in other settings should actually never show up on screen: they should be copy-pasted or automatically typed in forms. The password generators described here are all operated from the command line. Password managers often have embedded password generators, but usually don't provide an easy way to generate a password for the vault itself. The previously mentioned xkcd cartoon is probably a common cultural reference in the security crowd and I often use it to explain how to choose a good passphrase. It turns out that someone actually implemented xkcd author Randall Munroe's suggestion into a program called xkcdpass:
    $ xkcdpass
    estop mixing edelweiss conduct rejoin flexitime
In verbose mode, it will show the actual entropy of the generated passphrase:
    $ xkcdpass -V
    The supplied word list is located at /usr/lib/python3/dist-packages/xkcdpass/static/default.txt.
    Your word list contains 38271 words, or 2^15.22 words.
    A 6 word password from this list will have roughly 91 (15.22 * 6) bits of entropy,
    assuming truly random word selection.
    estop mixing edelweiss conduct rejoin flexitime
Note that the above password has 91 bits of entropy, which is about what a fifteen-character password would have, if chosen at random from uppercase, lowercase, digits, and ten symbols:
    log2((26 + 26 + 10 + 10)^15) = approx. 92.548875
It's also interesting to note that this is closer to the entropy of a fifteen-letter base64 encoded password: since each character is six bits, you end up with 90 bits of entropy. xkcdpass is scriptable and easy to use. You can also customize the word list, separators, and so on with different command-line options. By default, xkcdpass uses the 2 of 12 word list from 12 dicts, which is not specifically geared toward password generation but has been curated for "common words" and words of different sizes. Another option is the diceware system. Diceware works by having a word list in which you look up words based on dice rolls. For example, rolling the five dice "1 4 2 1 4" would give the word "bilge". By rolling those dice five times, you generate a five word password that is both memorable and random. Since paper and dice do not seem to be popular anymore, someone wrote that as an actual program, aptly called diceware. It works in a similar fashion, except that passwords are not space separated by default:
    $ diceware
    AbateStripDummy16thThanBrock
Diceware can obviously change the output to look similar to xkcdpass, but can also accept actual dice rolls for those who do not trust their computer's entropy source:
    $ diceware -d ' ' -r realdice -w en_orig
    Please roll 5 dice (or a single dice 5 times).
    What number shows dice number 1? 4
    What number shows dice number 2? 2
    What number shows dice number 3? 6
    [...]
    Aspire O's Ester Court Born Pk
The diceware software ships with a few word lists, and the default list has been deliberately created for generating passwords. It is derived from the standard diceware list with additions from the SecureDrop project. Diceware ships with the EFF word list that has words chosen for better recognition, but it is not enabled by default, even though diceware recommends using it when generating passwords with dice. That is because the EFF list was added later on. The project is currently considering making the EFF list be the default. One disadvantage of diceware is that it doesn't actually show how much entropy the generated password has; those interested need to compute it for themselves. The actual number depends on the word list: the default word list has 13 bits of entropy per word (since it is exactly 8192 words long), which means the default 6 word passwords have 78 bits of entropy:
    log2(8192) * 6 = 78
Both of these programs are rather new, having, for example, entered Debian only after the last stable release, so they may not be directly available for your distribution. The manual diceware method, of course, only needs a set of dice and a word list, so that is much more portable, and both the diceware and xkcdpass programs can be installed through pip. However, if this is all too complicated, you can take a look at Openwall's passwdqc, which is older and more widely available. It generates more memorable passphrases while at the same time allowing for better control over the level of entropy:
    $ pwqgen
    vest5Lyric8wake
    $ pwqgen random=78
    Theme9accord=milan8ninety9few
For some reason, passwdqc restricts the entropy of passwords between the bounds of 24 and 85 bits. That tool is also much less customizable than the other two: what you see here is pretty much what you get. The 4096-word list is also hardcoded in the C source code; it comes from a Usenet sci.crypt posting from 1997. A key feature of xkcdpass and diceware is that you can craft your own word list, which can make dictionary-based attacks harder. Indeed, with such word-based password generators, the only viable way to crack those passwords is to use dictionary attacks, because the password is so long that character-based exhaustive searches are not workable, since they would take centuries to complete. Changing from the default dictionary therefore brings some advantage against attackers. This may be yet another "security through obscurity" procedure, however: a naive approach may be to use a dictionary localized to your native language (for example, in my case, French), but that would deter only an attacker that doesn't do basic research about you, so that advantage is quickly lost to determined attackers. One should also note that the entropy of the password doesn't depend on which word list is chosen, only its length. Furthermore, a larger dictionary only expands the search space logarithmically; in other words, doubling the word-list length only adds a single bit of entropy. It is actually much better to add a word to your password than words to the word list that generates it.

Generating security tokens As mentioned before, most password managers feature a way to generate strong security tokens, with different policies (symbols or not, length, etc). In general, you should use your password manager's password-generation functionality to generate tokens for sites you visit. But how are those functionalities implemented and what can you do if your password manager (for example, Firefox's master password feature) does not actually generate passwords for you? pass, the standard UNIX password manager, delegates this task to the widely known pwgen program. It turns out that pwgen has a pretty bad track record for security issues, especially in the default "phoneme" mode, which generates non-uniformly distributed passwords. While pass uses the more "secure" -s mode, I figured it was worth removing that option to discourage the use of pwgen in the default mode. I made a trivial patch to pass so that it generates passwords correctly on its own. The gory details are in this email. It turns out that there are lots of ways to skin this particular cat. I was suggesting the following pipeline to generate the password:
    head -c $entropy /dev/random | base64 | tr -d '\n='
The above command reads a certain number of bytes from the kernel (head -c $entropy /dev/random), encodes them using the base64 algorithm, and strips out the trailing equal sign and newlines (for large passwords). This is what Gillmor described as a "high-entropy compact printable/transferable string". The priority, in this case, is to have a token that is as compact as possible with the given entropy, while at the same time using a character set that should cause as little trouble as possible on sites that restrict the characters you can use. Gillmor is a co-maintainer of the Assword password manager, which chose base64 because it is widely available and understood and only takes up 33% more space than the original 8-bit binary encoding. After a lengthy discussion, the pass maintainer, Jason A. Donenfeld, chose the following pipeline:
    read -r -n $length pass < <(LC_ALL=C tr -dc "$characters" < /dev/urandom)
The above is similar, except it uses tr to read characters directly from the kernel, and selects a certain set of characters ($characters) that is defined earlier as consisting of [:alnum:] for letters and digits and [:graph:] for symbols, depending on the user's configuration. Then the read command extracts the chosen number of characters from the output and stores the result in the pass variable. A participant on the mailing list, Brian Candler, has argued that this wastes entropy, as the use of tr discards bits from /dev/urandom with little gain in entropy when compared to base64. But in the end, the maintainer argued that "reading from /dev/urandom has no [effect] on /proc/sys/kernel/random/entropy_avail on Linux" and dismissed the objection. Another password manager, KeePass, uses its own routines to generate tokens, but the procedure is the same: read from the kernel's entropy source (and user-generated sources in the case of KeePass) and transform that data into a transferable string.
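As a usage example of the first pipeline above (the 16-byte figure is only an illustration; pick whatever entropy you need):
    entropy=16   # bytes of randomness: 16 * 8 = 128 bits
    head -c "$entropy" /dev/random | base64 | tr -d '\n='
    # prints a 22-character token, e.g. 3Q2e1kPLbXgGgx5nHg0M2w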

Conclusion While there are many aspects to password management, we have focused on different techniques for users and developers to generate secure but also usable passwords. Generating a strong yet memorable password is not a trivial problem, as the security vulnerabilities of the pwgen software showed. Furthermore, left to their own devices, users will generate passwords that can be easily guessed by a skilled attacker, especially if they can profile the user. It is therefore essential that we provide easy tools for users to generate strong passwords and encourage them to store secure tokens in password managers.
Note: this article first appeared in the Linux Weekly News.

4 February 2017

Thorsten Alteholz: My Debian Activities in January 2017

FTP assistant This month I only marked 146 packages for accept and rejected 25 packages. I only sent 3 emails to maintainers asking questions. Nevertheless I passed a big mark: all in all, I have now accepted more than 10000 packages! Debian LTS This was my thirty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my all in all workload has been 12.75h. During that time I did several uploads. Unfortunately the upload of jasper had to be postponed, as there is no upstream fix for most of the open CVEs yet.
I also suggested not to fix the slurm-llnl CVE, as the patch would be too invasive. Further I did another week of frontdesk work. Last but not least I took care of about 140 items of the TODO list[1]. Ok, it was not that much work, but the enormous number is impressive :-). I also had a look at [2] and filed bugs against two packages. Within hours the maintainers responded to those bugs and clarified everything, so the CVEs could be marked as not-affected and nobody has to care about them anymore. This is a good example of how the knowledge of the maintainer can help the security teams! So, if you have some time left, have a look at [3] and take care of something. [1] https://security-tracker.debian.org/tracker/status/todo
[2] https://security-tracker.debian.org/tracker/status/unreported
[3] https://security-tracker.debian.org/tracker Other stuff This month I sponsored a new round of sidedoor and printrun. After advocating Dara Adib to become a Debian Maintainer, I hope my activities as a sponsor can be reduced again :-). Further I uploaded another version of setserial, but as you can see in #850762 it does not seem to satisfy everybody. I also uploaded new upstream versions of duktape and pipexec. As I didn't do any DOPOM in December, I adopted two packages in January: pescetti and salliere. I dedicate those uploads to my aunt Birgit, who was a passionate bridge player. You will never be forgotten.

7 January 2017

Thorsten Alteholz: My Debian Activities in December 2016

FTP assistant This month I marked 367 packages for accept and rejected 45 packages. This time I only sent 10 emails to maintainers asking questions. Debian LTS This was my thirtieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my all in all workload has been 13.50h. During that time I did several uploads. Other stuff The Debian Med Advent Calendar was really successful this year. As announced in [1], this year the second highest number of bugs has been closed during the bug squashing:
year    number of bugs closed
2011     63
2012     28
2013     73
2014      5
2015    150
2016     95
Well done everybody who participated! In December I also uploaded new upstream versions of duktape, fixed bugs in openzwave, did a binary upload for mpb on mipsel, sponsored openzwave-controlpanel, sidedoor and printrun.
Thanks to lamby, openzwave-controlpanel and sidedoor even made it into Stretch. Last but not least I want to wish everybody a Happy New Year. [1] https://lists.debian.org/debian-med/2016/12/msg00180.html

2 January 2017

Markus Koschany: My Free Software Activities in December 2016

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Android, Java, Games and LTS topics, this might be interesting for you. Debian Android Debian Games Debian Java Debian LTS This was my tenth month as a paid contributor and I have been paid to work 13.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Non-maintainer uploads

20 December 2016

Reproducible builds folks: Reproducible Builds: week 86 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday December 11 and Saturday December 17 2016: Reproducible builds world summit The 2nd Reproducible Builds World Summit was held in Berlin, Germany on December 13th-15th. The event was a great success with enthusiastic participation from an extremely diverse set of projects. Many thanks to our sponsors for making this event possible! Whilst an in-depth report is forthcoming, the Guix project has already released its own report. Media coverage Reproducible work in other projects Documentation update A large number of revisions were made to the website during the summit, including re-structuring existing content and creating a concrete plan to move the wiki content to the website. Elsewhere in Debian Packages reviewed and fixed, and bugs filed Chris Lamb: Daniel Shahaf: Reiner Herrmann: Reviews of unreproducible packages 9 package reviews have been added, 19 have been updated and 17 have been removed this week, adding to our knowledge about identified issues. 3 issue types have been added and one issue type was updated. Weekly QA work During our reproducibility testing, some FTBFS bugs have been detected and reported. diffoscope development reprotest development trydiffoscope development Misc. This week's edition was written by Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC and via email.

12 December 2016

Kees Cook: security things in Linux v4.9

Previously: v4.8. Here are a bunch of security things I'm excited about in the newly released Linux v4.9: Latent Entropy GCC plugin Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX's Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc., this provides some additional uncertainty to the kernel's entropy pool. Since the entropy actually gathered is hard to measure, no entropy is "credited", but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy. vmapped kernel stack and thread_info relocation on x86 Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process's stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stacks via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write. Related to this, the kernel was storing thread_info (which contained sensitive values like addr_limit) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86. CONFIG_DEBUG_RODATA mandatory on arm64 As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there's no reason to make the protection optional. randomize_page() cleanup Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd() for ET_DYN and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap (and therefore shared library) and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner randomize_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand. That's it for now! Let me know if there are other fun things to call attention to in v4.9.
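As a quick aside (a sketch of mine, not from the original post; the config file location depends on your distribution), you can check whether a running kernel was built with these hardening options by grepping its config:
    grep -E 'CONFIG_(VMAP_STACK|THREAD_INFO_IN_TASK|GCC_PLUGIN_LATENT_ENTROPY|DEBUG_RODATA)' \
        /boot/config-"$(uname -r)"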

2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

5 December 2016

Reproducible builds folks: Reproducible Builds: week 84 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday November 27 and Saturday December 3 2016: Reproducible work in other projects Media coverage, etc. Bugs filed Chris Lamb: Clint Adams: Dafydd Harries: Daniel Shahaf: Reiner Herrmann: Valerie R Young: Reviews of unreproducible packages 15 package reviews have been added, 4 have been updated and 26 have been removed this week, adding to our knowledge about identified issues. 2 issue types have been added. Weekly QA work During our reproducibility testing, some FTBFS bugs have been detected and reported. diffoscope development It is available now in Debian, Arch Linux and on PyPI. strip-nondeterminism development reprotest development tests.reproducible-builds.org Misc. This week's edition was written by Chris Lamb, Valerie Young, Vagrant Cascadian, Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

1 December 2016

Thorsten Alteholz: My Debian Activities in November 2016

FTP assistant This month I marked 377 packages for accept and rejected 36 packages. I also sent 13 emails to maintainers asking questions. Debian LTS This was my twenty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my all in all workload has been 11h. During that time I did several uploads. The upload of curl started as an embargoed one, but the discussion about one fix took some time and the upload was a bit delayed. I also prepared a test package for jasper which takes care of nine CVEs and is available here. If you are interested in jasper, please download it and check whether everything is working in your environment. As upstream only takes care of CVEs/bugs at the moment, maybe we should not upload the old version with patches but the new version with all fixes. Any comments? Other stuff As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug is closed. Don't hesitate, start to squash :-). In November I also uploaded new versions of libmatthew-java, node-array-find-index, node-ejs, node-querystringify, node-require-dir, node-setimmediate and libkeepalive.
Further I added node-json5, node-emojis-list, node-big.js, node-eslint-plugin-flowtype to the NEW queue, sponsored an upload of node-lodash, adopted gnupg-pkcs11-scd, reverted the -fPIC-patch in libctl and fixed RC bugs in alljoyn-core-1504, alljoyn-core-1509, alljoyn-core-1604.

30 November 2016

Chris Lamb: Free software activities in November 2016

Here is my monthly update covering what I have been doing in the free software world (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:

My work in the Reproducible Builds project was also covered in our weekly reports (#80, #81, #82 & #83).

Toolchain issues I submitted the following patches to fix reproducibility-related toolchain issues with Debian:

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.


jenkins.debian.net

jenkins.debian.net runs our comprehensive testing framework.

  • buildinfo.debian.net has moved to SSL. (ac3b9e7)
  • Submit signing keys to keyservers after generation. (bdee6ff)
  • Various cosmetic changes, including
    • Prefer if X not in Y over if not X in Y. (bc23884)
    • No need for a dictionary; let's just use a set. (bf3fb6c)
    • Avoid DRY violation by using a for loop. (4125ec5)

I also submitted 9 patches to fix specific reproducibility issues in apktool, cairo-5c, lava-dispatcher, lava-server, node-rimraf, perlbrew, qsynth, tunnelx & zp.

Debian

Debian LTS This month I have been paid to work 11 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 697-1 for bsdiff fixing an arbitrary write vulnerability.
  • Issued DLA 705-1 for python-imaging correcting a number of memory overflow issues.
  • Issued DLA 713-1 for sniffit where a buffer overflow allowed a specially-crafted configuration file to provide a root shell.
  • Issued DLA 723-1 for libsoap-lite-perl preventing a Billion Laughs XML expansion attack.
  • Issued DLA 724-1 for mcabber fixing a roster push attack.

Uploads
  • redis:
    • 3.2.5-2 Tighten permissions of /var/{lib,log}/redis. (#842987)
    • 3.2.5-3 & 3.2.5-4 Improve autopkgtest tests and install upstream's MANIFESTO and README.md documentation.
  • gunicorn (19.6.0-9) Adding autopkgtest tests.
  • libfiu:
    • 0.94-1 Add autopkgtest tests.
    • 0.95-1, 0.95-2 & 0.95-3 New upstream release and improve autopkgtest coverage.
  • python-django (1.10.3-1) New upstream release.
  • aptfs (0.8-3, 0.8-4 & 0.8-5) Adding and subsequently improving the autopkgtest tests.


I performed the following QA uploads:


Finally, I also made the following non-maintainer uploads:
  • libident (0.22-3.1) Move from obsolete Source-Version substvar to binary:Version. (#833195)
  • libpcl1 (1.6-1.1) Move from obsolete Source-Version substvar to binary:Version. (#833196)
  • pygopherd (2.0.18.4+nmu1) Move from obsolete Source-Version substvar to ${source:Version}. (#833202)


RC bugs


I also filed 59 FTBFS bugs against arc-gui-clients, asyncpg, blhc, civicrm, d-feet, dpdk, fbpanel, freeciv, freeplane, gant, golang-github-googleapis-gax-go, golang-github-googleapis-proto-client-go, haskell-cabal-install, haskell-fail, haskell-monadcatchio-transformers, hg-git, htsjdk, hyperscan, jasperreports, json-simple, keystone, koji, libapache-mod-musicindex, libcoap, libdr-tarantool-perl, libmath-bigint-gmp-perl, libpng1.6, link-grammar, lua-sql, mediatomb, mitmproxy, ncrack, net-tools, node-dateformat, node-fuzzaldrin-plus, node-nopt, open-infrastructure-system-images, photofloat, ppp, ptlib, python-mpop, python-mysqldb, python-passlib, python-protobix, python-ttystatus, redland, ros-message-generation, ruby-ethon, ruby-nokogiri, salt-formula-ceilometer, spykeviewer, sssd, suil, torus-trooper, trash-cli, twisted-web2, uftp & wide-dhcpv6.

FTP Team

As a Debian FTP assistant I ACCEPTed 70 packages: bbqsql, coz-profiler, cross-toolchain-base, cross-toolchain-base-ports, dgit-test-dummy, django-anymail, django-hstore, django-html-sanitizer, django-impersonate, django-wkhtmltopdf, gcc-6-cross, gcc-defaults, gnome-shell-extension-dashtodock, golang-defaults, golang-github-btcsuite-fastsha256, golang-github-dnephin-cobra, golang-github-docker-go-events, golang-github-gogits-cron, golang-github-opencontainers-image-spec, haskell-debian, kpmcore, libdancer-logger-syslog-perl, libmoox-buildargs-perl, libmoox-role-cloneset-perl, libreoffice, linux-firmware-raspi3, linux-latest, node-babel-runtime, node-big.js, node-buffer-shims, node-charm, node-cliui, node-core-js, node-cpr, node-difflet, node-doctrine, node-duplexer2, node-emojis-list, node-eslint-plugin-flowtype, node-everything.js, node-execa, node-grunt-contrib-coffee, node-grunt-contrib-concat, node-jquery-textcomplete, node-js-tokens, node-json5, node-jsonfile, node-marked-man, node-os-locale, node-sparkles, node-tap-parser, node-time-stamp, node-wrap-ansi, ooniprobe, policycoreutils, pybind11, pygresql, pysynphot, python-axolotl, python-drizzle, python-geoip2, python-mockupdb, python-pyforge, python-sentinels, python-waiting, pythonmagick, r-cran-isocodes, ruby-unicode-display-width, suricata & voctomix-outcasts. I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: node-cliui, node-core-js, node-cpr & node-grunt-contrib-concat.

3 November 2016

Jan Wagner: Container Orchestration Thoughts

For some time now everybody (read: developers) wants to run their new microservice stacks in containers. I can understand that building and testing an application is important for developers.
One of the benefits of containers is that developers (in theory) can put their new versions of applications into production on their own. This is the point where operations is affected, and operations needs to evaluate whether that might evolve into a better workflow. For yolo^WdevOps people there are some challenges that need to be solved, or at least mitigated, when things need to be done at large(r) scale.

Orchestration Engine Running Docker, which is currently the most popular container solution, on a single host with the docker command-line client is something you can do, but this leaves a gap between dev and ops.

UI For Docker For some time now, UI For Docker has been available for visualizing and managing containers on a single Docker node. It's pretty awesome and the best feature so far is the Container Network view, which also shows the linked containers.

Portainer Portainer is pretty new and it can be deployed as easily as UI For Docker. But the (first) great advantage: it can handle Docker Swarm. Besides that it has many other great features.

Rancher Rancher describes itself as a 'container management platform' that 'supports and manages all of your Kubernetes, Mesos, and Swarm clusters'. This is great because these are currently all of the relevant Docker cluster orchestration engines on the market. For the use cases we are facing, Kubernetes and Mesos both seem like bloated beasts. Usman Ismail has written a really good comparison of orchestration engine options which goes into detail.

Docker Swarm As there is currently no clear de-facto standard/winner of the (container) orchestration wars, I would avoid being in a vendor lock-in situation (yet). Docker Swarm seems to be evolving and is getting more nice features that other competitors don't provide.
Due to the native integration into the Docker framework and the great community, I believe Docker Swarm will be the Docker orchestration of choice in the long run. This should be supported by Rancher 1.2, which is not released yet.
From this point of view it looks very reasonable that Docker Swarm in combination with Rancher (1.2) might be a good strategy to maintain your container farms in the future. If you are thinking of putting Docker Swarm into production in its current state, I recommend reading Docker swarm mode: What to know before going live on production by Panjamapong Sermsawatsri.

Persistent Storage While it is a best practice to use data volume containers these days, providing persistent storage across multiple hosts for shared volumes seems to be tricky. In theory you can mount a shared-storage volume as a data volume, and there are several volume plugins which support shared storage. For example you can use the convoy plugin, which gives you:
  • thin provisioned volumes
  • snapshots of volumes
  • backup of snapshots
  • restore volumes
As backend you can use:
  • Device Mapper
  • Virtual File System(VFS)/Network File System(NFS)
  • Amazon Elastic Block Store(EBS)
The good thing is that convoy is integrated into Rancher. For more information I suggest reading Setting Up Shared Volumes with Convoy-NFS, which also mentions some limitations. If you want to test the Persistent Storage Service, Rancher provides some documentation. So far I have not evaluated shared-storage volumes myself, and I don't see a solution I would love to use in production (at least on-premise) without strong downsides. But maybe things will move forward and there might be a great solution for these caveats in the future.
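For a rough idea of how this looks in practice, here is a sketch based on the convoy documentation (the NFS mount point and volume name are assumptions): the daemon is started with a backend and then used as a regular Docker volume driver:
    # Start the convoy daemon with a VFS/NFS backend
    sudo convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/nfs-share &
    # Use convoy as volume driver; the named volume lives on the
    # shared storage and survives the container
    docker run -it -v demo_volume:/data --volume-driver=convoy busybox sh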

Keeping base images up-to-date For some time now there have been many projects that try to detect security problems in your container images in several ways.
Besides general security considerations, you need to deal somehow with issues in the base images that you build your applications on. Of course, even if you know you have a security issue in your application image, you need to fix it, and how depends on what you based your application image upon.

Ways to base your application image
  • You can build your application image entirely from scratch, which leaves all the work to your development team; I wouldn't recommend it that way.
  • You also can create one (or more) intermediate image(s) that will be used by your development team.
  • The development team might ground their work on images in publicly available or private registries (for example the one bundled with your GitLab CI/CD solution).

What's the struggle with the base image? If you are using images that are not (well) maintained by other people, you have to wait for them to fix your base image. Using external images might also lead to trust problems (can you trust those people in general?).
In an ideal world, your developers always have fresh base images with security issues fixed. This can probably be achieved by rebuilding every intermediate image periodically or whenever the base image changes.

Paradigm change Anyway, when you have a new application image available (with no known security issues), you need to deploy it to production. This is summarized by Jason McKay in his article Docker Security: How to Monitor and Patch Containers in the Cloud:
To implement a patch, update the base image and then rebuild the application image. This will require systems and development teams to work closely together.
So patching security issues in the container world changes the workflow significantly. In the old world, operations teams mostly rolled out security fixes for the base systems independently of the development teams.
Now that containers are hitting production, this might change things significantly.

Bringing updated images to production Imagine your development team doesn't work steadily on a project because the product owner considers it feature complete. The base image is provided (in some way) consistently without security issues. The application image is built on top of that automatically on every update of the base image.
How do you push the security fixes to production in such a scenario? From my point of view you have two choices:
  • Require the development team to test the resulting application image and put it into production
  • Push the new application image into production without review by the development team
The first scenario might lead to a significant delay until the fixes hit production, caused by the probably infrequent work of the development team. The latter brings your security fixes to production early, at the notably higher risk of breaking your application. This risk can be reduced by the development team implementing extensive tests in the CI/CD pipelines. Rolling updates provided by Docker Swarm might also reduce the risk of ending up with a broken application. When you are implementing an update process for your (application) images to production, you should consider Watchtower, which provides automatic updates for Docker containers.
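As a sketch of such a setup (the image name v2tec/watchtower is an assumption that may have changed since; check the project's documentation), Watchtower itself runs as a container with access to the Docker socket:
    # Watchtower watches running containers and pulls/redeploys them
    # when a newer image appears in the registry
    docker run -d --name watchtower \
        -v /var/run/docker.sock:/var/run/docker.sock \
        v2tec/watchtower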

Conclusion As long as I am not the product owner or responsible for operations of an application whose widely adopted usage would compensate for the tradeoffs we are still facing, I tend not to move large-scale production projects into a container environment.
This does not mean that it might be a bad idea for others, but I'd like to sort out some of the caveats first. I'm still interested in putting smaller projects into production, and I'm not scared to reimplement or move them onto a new stack.
For smaller projects with a small number of hosts, Portainer doesn't look bad, nor does Rancher with the Cattle orchestration engine, if you just want to manage a couple of nodes. Things are going to be interesting if Rancher 1.2 supports Docker Swarm clusters out of the box. Let's see what the future brings to the container world and how to make a great stack out of it.

Update I suggest reading Docker in Production: A History of Failure and the answer Docker in Production: A retort to understand the actual challenges when running Docker in larger-scale production environments.

2 November 2016

Markus Koschany: My Free Software Activities in October 2016

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Android, Java, Games and LTS topics, this might be interesting for you. Debian Android Debian Games Debian Java Debian LTS This was my eighth month as a paid contributor and I have been paid to work 13 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Non-maintainer uploads QA

29 October 2016

Jaldhar Vyas: Dawkins Weasel

Happy Dhanteras from Bappy Lahiri
It's already Dhanteras, so I had better pick up the pace if I want to beat my blogging challenge before Diwali. So in this post I'll discuss a program I wrote earlier this year.
I dread looking up anything on Wikipedia because I always end up going down a rabbit hole and surfacing hours later on a totally unrelated topic. Case in point: some months ago I ended up on the page of this post's title. It describes an interesting little experiment illustrating how random mutation combined with cumulative selection can result in the evolution of a specific form. The algorithm is:

  1. Start with a random string of 28 characters.
  2. Make 100 copies of this string, with a 5% chance per character of that character being replaced with a random character.
  3. Compare each new string with "METHINKS IT IS LIKE A WEASEL", and give each a score (the number of letters in the string that are correct and in the correct position).
  4. If any of the new strings has a perfect score (== 28), halt.
  5. Otherwise, take the highest scoring string, and go to step 2.
I had to try this myself so I wrote a little implementation in C++. A sample run looks like this:
  
$ ./weasel
0000 DNCFICBLUZVC JF KKNVJJASCJRW (0)
0001 DNIFICOLUZVC JFLIKNVAJASCJEW (6)
0002 DNNWICKSUZVCRSFLIKNVA ASCJEL (11)
0003 DNNWICKSUZVCRSFLIKNVA ASCJEL (11)
0004 MNNVICKSQZVCRSFLIKNVA WSCJEL (13)
0005 MENVICKSQZVCRSFLIKNVA WSCJEL (14)
0006 MENVISKS ZTCRSFLIKNVA WLCJEL (16)
0007 MENVISKS ZTCRSFLIKNVA WLCJEL (16)
0008 MEDHISKS ZTCISFLIKNVA WLCJEL (18)
0009 MEDHISKS ZTCISFLIKNVA WLCJEL (18)
0010 MEDHISKS ZTCISFLIKNVA WLCJEL (18)
0011 MEDHISKS ZTCIS LIKTKA WLCZEL (19)
0012 MEDHISKS ZTCIS LIKTKA WLCZEL (19)
0013 MEDHISKS ZTCIS LIKT A WLCZEL (20)
0014 MEDHISKS ZTCIS LIKT A WLCZEL (20)
0015 MEDHISKS ZTCIS LIKE A WLAZEL (22)
0016 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0017 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0018 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0019 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0020 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0021 MEDHIGKS ITCIS LIKE A WLAZEL (23)
0022 METHINKS ITCIS LIKE A WLASEL (26)
0023 METHINKS ITCIS LIKE A WLASEL (26)
0024 METHINKS ITCIS LIKE A WLASEL (26)
0025 METHINKS ITCIS LIKE A WEASEL (27)
0026 METHINKS ITCIS LIKE A WEASEL (27)
0027 METHINKS ITCIS LIKE A WEASEL (27)
0028 METHINKS ITCIS LIKE A WEASEL (27)
0029 METHINKS ITCIS LIKE A WEASEL (27)
0030 METHINKS ITCIS LIKE A WEASEL (27)
0031 METHINKS ITCIS LIKE A WEASEL (27)
0032 METHINKS ITCIS LIKE A WEASEL (27)
0033 METHINKS ITCIS LIKE A WEASEL (27)
0034 METHINKS ITCIS LIKE A WEASEL (27)
0035 METHINKS ITCIS LIKE A WEASEL (27)
0036 METHINKS ITCIS LIKE A WEASEL (27)
0037 METHINKS ITCIS LIKE A WEASEL (27)
0038 METHINKS ITCIS LIKE A WEASEL (27)
0039 METHINKS ITCIS LIKE A WEASEL (27)
0040 METHINKS ITCIS LIKE A WEASEL (27)
0041 METHINKS ITCIS LIKE A WEASEL (27)
0042 METHINKS ITCIS LIKE A WEASEL (27)
0043 METHINKS ITCIS LIKE A WEASEL (27)
0044 METHINKS ITCIS LIKE A WEASEL (27)
0045 METHINKS ITCIS LIKE A WEASEL (27)
0046 METHINKS ITCIS LIKE A WEASEL (27)
0047 METHINKS ITCIS LIKE A WEASEL (27)
0048 METHINKS ITCIS LIKE A WEASEL (27)
0049 METHINKS ITCIS LIKE A WEASEL (27)
0050 METHINKS ITCIS LIKE A WEASEL (27)
0051 METHINKS ITCIS LIKE A WEASEL (27)
0052 METHINKS ITCIS LIKE A WEASEL (27)
0053 METHINKS ITCIS LIKE A WEASEL (27)
0054 METHINKS IT IS LIKE A WEASEL (28)

My program lets you adjust the input string, the number of copies, and the mutation threshold. I also thought it might be interesting to implement the Generator design pattern. In C++ this is done by making a class which implements begin() and end() methods and at least a forward iterator. You can find the source code on GitHub.

20 October 2016

Héctor Orón Martínez: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service

In the previous post the Open Build Service software architecture was given an overview. In the current blog post, a tutorial on setting up a package build with OBS from Debian packages is presented. Steps: Generate a test environment by creating a Stretch/Sid VM. Really, use whatever suits you best, but please create an untrusted test environment for this one. The current tutorial assumes $hostname is "stretch", which should be the stretch or sid suite. Be aware that copying & pasting configuration files from this post might leave you with broken characters. Debian Stretch weekly netinst CD. Enable the experimental repository:
# echo "deb http://httpredir.debian.org/debian experimental main" >> /etc/apt/sources.list.d/experimental.list
# apt-get update
Install and set up the OBS server, API, worker and osc CLI packages
# apt-get install obs-server obs-api obs-worker osc
During the install process a mysql database is needed, so if no mysql server is set up yet, a password needs to be provided.
When the OBS API database obs-api is created, we need to pick a password for it: provide "opensuse". The obs-api package will configure the apache2 https webserver (creating a dummy certificate for "stretch") to serve the OBS webui.
Add "stretch" and "obs" aliases to the localhost entry in your /etc/hosts file.
Enable worker by setting ENABLED=1 in /etc/default/obsworker
Try to connect to the web UI https://stretch/
Log into the OBS webui; the default login credentials are Admin/opensuse.
From the osc command line tool, try to list the projects in OBS:
 $ osc -A https://stretch ls
Accept dummy certificate and provide credentials (defaults: Admin/opensuse)
If the install proceeded as expected, follow on to the next step. Ensure all OBS services are running:
# backend services
obsrun     813  0.0  0.9 104960 20448 ?        Ss   08:33   0:03 /usr/bin/perl -w /usr/lib/obs/server/bs_dodup
obsrun     815  0.0  1.5 157512 31940 ?        Ss   08:33   0:07 /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun    1295  0.0  1.6 157644 32960 ?        S    08:34   0:07  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun     816  0.0  1.8 167972 38600 ?        Ss   08:33   0:08 /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
obsrun    1296  0.0  1.8 168100 38864 ?        S    08:34   0:09  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
memcache   817  0.0  0.6 346964 12872 ?        Ssl  08:33   0:11 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1
obsrun     818  0.1  0.5  78548 11884 ?        Ss   08:33   0:41 /usr/bin/perl -w /usr/lib/obs/server/bs_dispatch
obsserv+   819  0.0  0.3  77516  7196 ?        Ss   08:33   0:05 /usr/bin/perl -w /usr/lib/obs/server/bs_service
mysql      851  0.0  0.0   4284  1324 ?        Ss   08:33   0:00 /bin/sh /usr/bin/mysqld_safe
mysql     1239  0.2  6.3 1010744 130104 ?      Sl   08:33   1:31  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
# web services
root      1452  0.0  0.1 110020  3968 ?        Ss   08:34   0:01 /usr/sbin/apache2 -k start
root      1454  0.0  0.1 435992  3496 ?        Ssl  08:34   0:00  \_ Passenger watchdog
root      1460  0.3  0.2 651044  5188 ?        Sl   08:34   1:46      \_ Passenger core
nobody    1465  0.0  0.1 444572  3312 ?        Sl   08:34   0:00      \_ Passenger ust-router
www-data  1476  0.0  0.1 855892  2608 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1477  0.0  0.1 856068  2880 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1761  0.0  4.9 426868 102040 ?       Sl   08:34   0:29 delayed_job.0
www-data  1767  0.0  4.8 425624 99888 ?        Sl   08:34   0:30 delayed_job.1
www-data  1775  0.0  4.9 426516 101708 ?       Sl   08:34   0:28 delayed_job.2
nobody    1788  0.0  5.7 496092 117480 ?       Sl   08:34   0:03 Passenger RubyApp: /usr/share/obs/api
nobody    1796  0.0  4.9 488888 102176 ?       Sl   08:34   0:00 Passenger RubyApp: /usr/share/obs/api
www-data  1814  0.0  4.5 282576 92376 ?        Sl   08:34   0:22 delayed_job.1000
www-data  1829  0.0  4.4 282684 92228 ?        Sl   08:34   0:22 delayed_job.1010
www-data  1841  0.0  4.5 282932 92536 ?        Sl   08:34   0:22 delayed_job.1020
www-data  1855  0.0  4.9 427988 101492 ?       Sl   08:34   0:29 delayed_job.1030
www-data  1865  0.2  5.0 492500 102964 ?       Sl   08:34   1:09 clockworkd.clock
www-data  1899  0.0  0.0  87100  1400 ?        S    08:34   0:00 /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
www-data  1900  0.1  0.4 161620  8276 ?        Sl   08:34   0:51  \_ /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
# OBS worker
root      1604  0.0  0.0  28116  1492 ?        Ss   08:34   0:00 SCREEN -m -d -c /srv/obs/run/worker/boot/screenrc
root      1605  0.0  0.9  75424 18764 pts/0    Ss+  08:34   0:06  \_ /usr/bin/perl -w ./bs_worker --hardstatus --root /srv/obs/worker/root_1 --statedir /srv/obs/run/worker/1 --id stretch:1 --reposerver http://obs:5252 --jobs 1
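If any of these are missing, you can also query systemd directly; a sketch using the unit names that appear in the journalctl command later in this post:

# systemctl status obsdispatcher.service obsdodup.service obsscheduler@x86_64.service obsworker.service obspublisher.service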
Create an OBS project for Download on Demand (DoD)
Create a meta project file:
$ osc -A https://stretch:443 meta prj Debian:8 -e
<project name="Debian:8">
  <title>Debian 8 DoD</title>
  <description>Debian 8 DoD</description>
  <person userid="Admin" role="maintainer"/>
  <repository name="main">
    <download arch="x86_64" url="http://deb.debian.org/debian/jessie/main" repotype="deb"/>
    <arch>x86_64</arch>
  </repository>
</project>
Visit the webUI to check the project configuration. Then create a meta project configuration file:
$ osc -A https://stretch:443 meta prjconf Debian:8 -e
Add the following content, as found at build.opensuse.org:
Repotype: debian
# create initial user
Preinstall: base-passwd
Preinstall: user-setup
# required for preinstall images
Preinstall: perl
# preinstall essentials + dependencies
Preinstall: base-files base-passwd bash bsdutils coreutils dash debconf
Preinstall: debianutils diffutils dpkg e2fslibs e2fsprogs findutils gawk
Preinstall: gcc-4.9-base grep gzip hostname initscripts insserv libacl1
Preinstall: libattr1 libblkid1 libbz2-1.0 libc-bin libc6 libcomerr2 libdb5.3
Preinstall: libgcc1 liblzma5 libmount1 libncurses5 libpam-modules
Preinstall: libpcre3 libsmartcols1
Preinstall: libpam-modules-bin libpam-runtime libpam0g libreadline6
Preinstall: libselinux1 libsemanage-common libsemanage1 libsepol1 libsigsegv2
Preinstall: libslang2 libss2 libtinfo5 libustr-1.0-1 libuuid1 login lsb-base
Preinstall: mount multiarch-support ncurses-base ncurses-bin passwd perl-base
Preinstall: readline-common sed sensible-utils sysv-rc sysvinit sysvinit-utils
Preinstall: tar tzdata util-linux zlib1g
Runscripts: base-passwd user-setup base-files gawk
VMinstall: libdevmapper1.02.1
Order: user-setup:base-files
# Essential packages (this should also pull the dependencies)
Support: base-files base-passwd bash bsdutils coreutils dash debianutils
Support: diffutils dpkg e2fsprogs findutils grep gzip hostname libc-bin 
Support: login mount ncurses-base ncurses-bin perl-base sed sysvinit 
Support: sysvinit-utils tar util-linux
# Build-essentials
Required: build-essential
Prefer: build-essential:make
# build script needs fakeroot
Support: fakeroot
# lintian support would be nice, but breaks too much atm
#Support: lintian
# helper tools in the chroot
Support: less kmod net-tools procps psmisc strace vim
# everything below same as for Debian:6.0 (apart from the version macros ofc)
# circular dependencies in openjdk stack
Order: openjdk-6-jre-lib:openjdk-6-jre-headless
Order: openjdk-6-jre-headless:ca-certificates-java
Keep: binutils cpp cracklib file findutils gawk gcc gcc-ada gcc-c++
Keep: gzip libada libstdc++ libunwind
Keep: libunwind-devel libzio make mktemp pam-devel pam-modules
Keep: patch perl rcs timezone
Prefer: cvs libesd0 libfam0 libfam-dev expect
Prefer: gawk locales default-jdk
Prefer: xorg-x11-libs libpng fam mozilla mozilla-nss xorg-x11-Mesa
Prefer: unixODBC libsoup glitz java-1_4_2-sun gnome-panel
Prefer: desktop-data-SuSE gnome2-SuSE mono-nunit gecko-sharp2
Prefer: apache2-prefork openmotif-libs ghostscript-mini gtk-sharp
Prefer: glib-sharp libzypp-zmd-backend mDNSResponder
Prefer: -libgcc-mainline -libstdc++-mainline -gcc-mainline-c++
Prefer: -libgcj-mainline -viewperf -compat -compat-openssl097g
Prefer: -zmd -OpenOffice_org -pam-laus -libgcc-tree-ssa -busybox-links
Prefer: -crossover-office -libgnutls11-dev
# alternative pkg-config implementation
Prefer: -pkgconf
Prefer: -openrc
Prefer: -file-rc
Conflict: ghostscript-library:ghostscript-mini
Ignore: sysvinit:initscripts
Ignore: aaa_base:aaa_skel,suse-release,logrotate,ash,mingetty,distribution-release
Ignore: gettext-devel:libgcj,libstdc++-devel
Ignore: pwdutils:openslp
Ignore: pam-modules:resmgr
Ignore: rpm:suse-build-key,build-key
Ignore: bind-utils:bind-libs
Ignore: alsa:dialog,pciutils
Ignore: portmap:syslogd
Ignore: fontconfig:freetype2
Ignore: fontconfig-devel:freetype2-devel
Ignore: xorg-x11-libs:freetype2
Ignore: xorg-x11:x11-tools,resmgr,xkeyboard-config,xorg-x11-Mesa,libusb,freetype2,libjpeg,libpng
Ignore: apache2:logrotate
Ignore: arts:alsa,audiofile,resmgr,libogg,libvorbis
Ignore: kdelibs3:alsa,arts,pcre,OpenEXR,aspell,cups-libs,mDNSResponder,krb5,libjasper
Ignore: kdelibs3-devel:libvorbis-devel
Ignore: kdebase3:kdebase3-ksysguardd,OpenEXR,dbus-1,dbus-1-qt,hal,powersave,openslp,libusb
Ignore: kdebase3-SuSE:release-notes
Ignore: jack:alsa,libsndfile
Ignore: libxml2-devel:readline-devel
Ignore: gnome-vfs2:gnome-mime-data,desktop-file-utils,cdparanoia,dbus-1,dbus-1-glib,krb5,hal,libsmbclient,fam,file_alteration
Ignore: libgda:file_alteration
Ignore: gnutls:lzo,libopencdk
Ignore: gnutls-devel:lzo-devel,libopencdk-devel
Ignore: pango:cairo,glitz,libpixman,libpng
Ignore: pango-devel:cairo-devel
Ignore: cairo-devel:libpixman-devel
Ignore: libgnomeprint:libgnomecups
Ignore: libgnomeprintui:libgnomecups
Ignore: orbit2:libidl
Ignore: orbit2-devel:libidl,libidl-devel,indent
Ignore: qt3:libmng
Ignore: qt-sql:qt_database_plugin
Ignore: gtk2:libpng,libtiff
Ignore: libgnomecanvas-devel:glib-devel
Ignore: libgnomeui:gnome-icon-theme,shared-mime-info
Ignore: scrollkeeper:docbook_4,sgml-skel
Ignore: gnome-desktop:libgnomesu,startup-notification
Ignore: python-devel:python-tk
Ignore: gnome-pilot:gnome-panel
Ignore: gnome-panel:control-center2
Ignore: gnome-menus:kdebase3
Ignore: gnome-main-menu:rug
Ignore: libbonoboui:gnome-desktop
Ignore: postfix:pcre
Ignore: docbook_4:iso_ent,sgml-skel,xmlcharent
Ignore: control-center2:nautilus,evolution-data-server,gnome-menus,gstreamer-plugins,gstreamer,metacity,mozilla-nspr,mozilla,libxklavier,gnome-desktop,startup-notification
Ignore: docbook-xsl-stylesheets:xmlcharent
Ignore: liby2util-devel:libstdc++-devel,openssl-devel
Ignore: yast2:yast2-ncurses,yast2-theme-SuSELinux,perl-Config-Crontab,yast2-xml,SuSEfirewall2
Ignore: yast2-core:netcat,hwinfo,wireless-tools,sysfsutils
Ignore: yast2-core-devel:libxcrypt-devel,hwinfo-devel,blocxx-devel,sysfsutils,libstdc++-devel
Ignore: yast2-packagemanager-devel:rpm-devel,curl-devel,openssl-devel
Ignore: yast2-devtools:perl-XML-Writer,libxslt,pkgconfig
Ignore: yast2-installation:yast2-update,yast2-mouse,yast2-country,yast2-bootloader,yast2-packager,yast2-network,yast2-online-update,yast2-users,release-notes,autoyast2-installation
Ignore: yast2-bootloader:bootloader-theme
Ignore: yast2-packager:yast2-x11
Ignore: yast2-x11:sax2-libsax-perl
Ignore: openslp-devel:openssl-devel
Ignore: java-1_4_2-sun:xorg-x11-libs
Ignore: java-1_4_2-sun-devel:xorg-x11-libs
Ignore: kernel-um:xorg-x11-libs
Ignore: tetex:xorg-x11-libs,expat,fontconfig,freetype2,libjpeg,libpng,ghostscript-x11,xaw3d,gd,dialog,ed
Ignore: yast2-country:yast2-trans-stats
Ignore: susehelp:susehelp_lang,suse_help_viewer
Ignore: mailx:smtp_daemon
Ignore: cron:smtp_daemon
Ignore: hotplug:syslog
Ignore: pcmcia:syslog
Ignore: avalon-logkit:servlet
Ignore: jython:servlet
Ignore: ispell:ispell_dictionary,ispell_english_dictionary
Ignore: aspell:aspel_dictionary,aspell_dictionary
Ignore: smartlink-softmodem:kernel,kernel-nongpl
Ignore: OpenOffice_org-de:myspell-german-dictionary
Ignore: mediawiki:php-session,php-gettext,php-zlib,php-mysql,mod_php_any
Ignore: squirrelmail:mod_php_any,php-session,php-gettext,php-iconv,php-mbstring,php-openssl
Ignore: simias:mono(log4net)
Ignore: zmd:mono(log4net)
Ignore: horde:mod_php_any,php-gettext,php-mcrypt,php-imap,php-pear-log,php-pear,php-session,php
Ignore: xerces-j2:xml-commons-apis,xml-commons-resolver
Ignore: xdg-menu:desktop-data
Ignore: nessus-libraries:nessus-core
Ignore: evolution:yelp
Ignore: mono-tools:mono(gconf-sharp),mono(glade-sharp),mono(gnome-sharp),mono(gtkhtml-sharp),mono(atk-sharp),mono(gdk-sharp),mono(glib-sharp),mono(gtk-sharp),mono(pango-sharp)
Ignore: gecko-sharp2:mono(glib-sharp),mono(gtk-sharp)
Ignore: vcdimager:libcdio.so.6,libcdio.so.6(CDIO_6),libiso9660.so.4,libiso9660.so.4(ISO9660_4)
Ignore: libcdio:libcddb.so.2
Ignore: gnome-libs:libgnomeui
Ignore: nautilus:gnome-themes
Ignore: gnome-panel:gnome-themes
Ignore: gnome-panel:tomboy
Substitute: utempter
%ifnarch s390 s390x ppc ia64
Substitute: java2-devel-packages java-1_4_2-sun-devel
%else
 %ifnarch s390x
Substitute: java2-devel-packages java-1_4_2-ibm-devel
 %else
Substitute: java2-devel-packages java-1_4_2-ibm-devel xorg-x11-libs-32bit
 %endif
%endif
Substitute: yast2-devel-packages docbook-xsl-stylesheets doxygen libxslt perl-XML-Writer popt-devel sgml-skel update-desktop-files yast2 yast2-devtools yast2-packagemanager-devel yast2-perl-bindings yast2-testsuite
#
# SUSE compat mappings
#
Substitute: gcc-c++ gcc
Substitute: libsigc++2-devel libsigc++-2.0-dev
Substitute: glibc-devel-32bit
Substitute: pkgconfig pkg-config
%ifarch %ix86
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-bigsmp kernel-debug kernel-um kernel-xen kernel-kdump
%endif
%ifarch ia64
Substitute: kernel-binary-packages kernel-default kernel-debug
%endif
%ifarch x86_64
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-xen kernel-kdump
%endif
%ifarch ppc
Substitute: kernel-binary-packages kernel-default kernel-kdump kernel-ppc64 kernel-iseries64
%endif
%ifarch ppc64
Substitute: kernel-binary-packages kernel-ppc64 kernel-iseries64
%endif
%ifarch s390
Substitute: kernel-binary-packages kernel-s390
%endif
%ifarch s390x
Substitute: kernel-binary-packages kernel-default
%endif
%define debian_version 800
Macros:
%debian_version 800
Visit the webUI to check the project configuration. Then create an OBS project linked to the DoD one:
$ osc -A https://stretch:443 meta prj test -e
<project name="test">
  <title>test</title>
  <description>test</description>
  <person userid="Admin" role="maintainer"/>
  <repository name="Debian_8.0">
    <path project="Debian:8" repository="main"/>
    <arch>x86_64</arch>
  </repository>
</project>
Visit the webUI to check the project configuration. Adding a package to the project:
$ osc -A https://stretch:443 co test ; cd test
$ mkdir hello ; cd hello ; apt-get source -d hello ; cd - ; 
$ osc add hello 
$ osc ci -m "New import" hello
The package should go to the dispatched state, then to the blocked state while it downloads build dependencies through the DoD link; eventually it should start building. Check the journal logs if something goes wrong or gets stuck. Visit the webUI to check the hello package build state.
OBS logging to the journal
Check in the journal logs that everything went fine:
$ sudo journalctl -u obsdispatcher.service -u obsdodup.service -u obsscheduler@x86_64.service -u obsworker.service -u obspublisher.service
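The build state can also be polled from the command line with osc; a quick sketch for the test project and hello package created above:

$ osc -A https://stretch results test hello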
Troubleshooting
Currently we are facing a few issues with the web UI, and there are more issues that have not been reported; please do reportbug obs-api.

25 August 2016

Joerg Jaspert: New gnupg-agent in Debian

In case you just upgraded to the latest gnupg-agent and use gnupg-agent as your ssh-agent, you may find that ssh refuses to work with a simple but unhelpful
sign_and_send_pubkey: signing failed: agent refused operation
This seems to come from systemd starting the agent, rather than a script at the start of the X session, so the agent ends up with either no tty or an unusable one. A simple
gpg-connect-agent updatestartuptty /bye
updates that, and voilà, ssh-agent functionality is back. Note: this assumes you have enable-ssh-support in your ~/.gnupg/gpg-agent.conf.
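For completeness, a minimal sketch of the pieces involved; the gpgconf socket lookup is an assumption that depends on your GnuPG version:

# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# in your shell startup, point ssh at the agent's socket
# (assumption: your GnuPG is new enough to support this gpgconf query)
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"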

22 August 2016

Vincent Sanders: Down the rabbit hole

My descent began with a user reporting a bug and I fear I am still on my way down.

Like Alice I headed down the hole. https://commons.wikimedia.org/wiki/File:Rabbit_burrow_entrance.jpg
The bug was simple enough: a Windows bitmap file caused NetSurf to crash. Pretty quickly this was tracked down to the libnsbmp library attempting to decode the file. As to why we have a heavily used library for bitmaps? I am afraid they are part of every icon file, and many websites still have favicons in that format.

Some time with a hex editor and the file format specification soon showed that the image in question was malformed and had a bad offset header entry. So I was faced with two issues: firstly, the decoder crashed when presented with badly encoded data; secondly, it failed to deal with incorrect header data.

This is typical of bug reports from real users: the obvious issues have already been encountered by the developers and unit tests written to prevent them; what remains is harder to reproduce. After a debugging session with Valgrind and Electric Fence I discovered the crash was actually caused by running off the front of an allocated block due to an incorrect bounds check. Fixing the bounds check was simple enough, as was working around the bad header value, and after adding a unit test for the issue I almost moved on.

Almost...

american fuzzy lop are almost as cute as cats https://commons.wikimedia.org/wiki/File:Rabbit_american_fuzzy_lop_buck_white.jpg
We already used the bitmap test suite of images to check the library decode, which was giving us a good 75% or so line coverage (I long ago added coverage testing to our CI system), but I wondered if there was a test set that might increase the coverage and perhaps exercise some more of the bounds checking code. A bit of searching turned up the american fuzzy lop (AFL) project's synthetic corpora of bmp and ico images.

After checking with the AFL authors that the images were usable in our project, I added them to our test corpus and discovered a whole heap of trouble. After fixing more bounds checks and signedness issues I finally had a library I was pretty sure was solid, with over 85% test coverage.

Then I had the idea of actually running AFL on the library. I had been avoiding this because my previous experimentation with other fuzzing utilities had been utter frustration and a very poor return on investment of time. The quick start guide looked straightforward enough, though, so I thought I would spend a short amount of time and maybe learn a useful tool.

I downloaded the AFL source and built it with a simple make which was an encouraging start. The library was compiled in debug mode with AFL instrumentation simply by changing the compiler and linker environment variables.

$ LD=afl-gcc CC=afl-gcc AFL_HARDEN=1 make VARIANT=debug test
afl-cc 2.32b by <lcamtuf@google.com>
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: src/libnsbmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 751 locations (64-bit, hardened mode, ratio 100%).
AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a
COMPILE: test/decode_bmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 52 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: test/decode_ico.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 65 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico
afl-cc 2.32b by <lcamtuf@google.com>
Test bitmap decode
Tests:606 Pass:606 Error:0
Test icon decode
Tests:392 Pass:392 Error:0
TEST: Testing complete

I stuffed the AFL build directory on the end of my PATH, created a directory for the output and ran afl-fuzz:

afl-fuzz -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null

The result was immediate and not a little worrying: within seconds there were crashes, and lots of them! Over the next couple of hours I watched as the unique crash total climbed into the triple digits.

I was forced to abort the run at this point as, despite clear warnings in the AFL documentation about the demands of the tool, my laptop was clearly not cut out for this kind of work and had become distressingly hot.

AFL has a visualisation tool so you can see what kind of progress it is making. It produced a graph that showed just how fast it managed to produce crashes and how much the return plateaus after just a few cycles, although it was still finding a new unique crash every ten minutes or so when aborted.

I dove in to analyse the crashes and it immediately became obvious that the main issue was the test tool attempting allocations of absurdly large bitmaps. The browser itself uses a heuristic to determine the maximum image size, based on used memory and several other values. I simply applied an upper bound of 48 megabytes per decoded image, which fits easily within the fuzzer's default heap limit of 50 megabytes.

The main source of "hangs" also came from large allocations, so once the test was fixed afl-fuzz was re-run with a timeout parameter set to 100ms. This time, after several minutes, no crashes and only a single hang were found, which came as a great relief; at that point my laptop had a hard shutdown due to a thermal event!
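For reference, the re-run differed from the first invocation only in the timeout flag; a sketch based on the command above (afl-fuzz takes -t in milliseconds):

afl-fuzz -t 100 -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null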

Once the laptop cooled down I spooled up a more appropriate system to perform this kind of work: a 24-way 2.1GHz Xeon system. A Debian Jessie guest VM with 20 processors and 20 gigabytes of memory was created, and the build was replicated and instrumented.

AFL master node display
To fully utilise this system, the next test run would use AFL in parallel mode. In this mode there is a single "master" instance running all the deterministic checks and many "secondary" instances performing random tweaks.
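A sketch of how such a run is typically launched, using AFL's -M/-S flags (the instance names are illustrative; sync_dir matches the afl-whatsup invocation below):

$ afl-fuzz -i test/bmp -o sync_dir -M master -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null
$ afl-fuzz -i test/bmp -o sync_dir -S secondary01 -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null
(one -S instance per remaining core)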

If I have one tiny annoyance with AFL, it is that breeding and feeding a herd of rabbits by hand is tedious, and something I would like to see a convenience utility for.

The warren was left overnight with 19 instances and by morning had generated crashes again. This time, though, the crashes actually appeared to be real failures.

$ afl-whatsup sync_dir/
Summary stats
=============

Fuzzers alive : 19
Total run time : 5 days, 12 hours
Total execs : 214 million
Cumulative speed : 8317 execs/sec
Pending paths : 0 faves, 542 total
Pending per fuzzer : 0 faves, 28 total (on average)
Crashes found : 554 locally unique

All the crashing test cases are available, and a simple file command immediately showed that the crashing test files had one thing in common: the height of the image was -2147483648. This seemingly odd number is meaningful to a programmer: it is the most negative number that can be stored in a 32-bit integer (INT32_MIN). I immediately examined the source code that processes the height in the image header.

if ((width <= 0) || (height == 0))
        return BMP_DATA_ERROR;
if (height < 0) {
        bmp->reversed = true;
        height = -height;
}

The bug is in making the height a positive number: negating INT32_MIN results in the height being set to 0 after the existing check for zero has already passed, which causes a crash later in execution. A simple fix was applied and a test case added, removing the crash and any possible future failure due to this.
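A guard of this general shape removes the failure mode; this is a sketch of the idea, not necessarily the exact patch that went in (INT32_MIN comes from stdint.h):

#include <stdint.h>

/* reject the unrepresentable case before negating: -INT32_MIN
 * does not fit in a 32-bit integer (sketch, not the actual patch) */
if ((width <= 0) || (height == 0) || (height == INT32_MIN))
        return BMP_DATA_ERROR;
if (height < 0) {
        bmp->reversed = true;
        height = -height;
}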

Another AFL run has been started and after a few hours has yet to find a crash or a non-false-positive hang, so it looks like any remaining crashes are much harder to uncover.

Main lessons learned are:
I will of course be debugging any new crashes that occur and perhaps turning my sights to the project's other unit-tested libraries. I will also be investigating the generation of our own custom test corpus from AFL to replace the demo set; this will hopefully increase our unit test coverage even further.

Overall this has been my first successful use of a fuzzing tool and a very positive experience. I would wholeheartedly recommend using AFL to find errors, and perhaps even integrating it into a CI system.

4 May 2016

Debian Java Packaging Team: What's new since Jessie?

Jessie was released one year ago now and the Java Team has been busy preparing the next release. Here is a quick summary of the current state of the Java packages, covering the outlook, goals and a request for help, Java and friends, and package updates. The packages listed below detail the changes in jessie-backports and testing; libraries and Debian-specific tools have been excluded. The changes comprise packages added to jessie-backports, packages removed from testing, packages added to testing, and packages upgraded in testing.

27 April 2016

Niels Thykier: auto-decrufter in top 5 after 10 months

About 10 months ago, we enabled an auto-decrufter in dak. After 3 months it had become the 11th biggest remover. Today, there are only 3 humans left who have removed more packages than the auto-decrufter; impressively enough, one of them is not even an active FTP-master (anymore). The current scoreboard:
 5371 Luca Falavigna
 5121 Alexander Reichle-Schmehl
 4401 Ansgar Burchardt
 3928 DAK's auto-decrufter
 3257 Scott Kitterman
 2225 Joerg Jaspert
 1983 James Troup
 1793 Torsten Werner
 1025 Jeroen van Wolffelaar
  763 Ryan Murray
For comparison, here is the number of removals by year for the past 6 years:
 5103 2011
 2765 2012
 3342 2013
 3394 2014
 3766 2015  (1842 removed by auto-decrufter)
 2845 2016  (2086 removed by auto-decrufter)
Which tells us that in 2015, the FTP masters and the decrufter performed on average over 10 removals a day, and by the looks of it, 2016 will surpass that. Of course, the auto-decrufter has a tendency to increase the number of removed items, since it is an advocate of "remove early, remove often!" :) Data is from https://ftp-master.debian.org/removals-full.txt. Scoreboard computed as:
grep ftpmaster: removals-full.txt | \
   perl -pe 's/.*ftpmaster:\s+//; s/\]$//;' | \
   sort | uniq -c | sort --numeric --reverse | head -n10
Removals by year computed as:
grep ftpmaster: removals-full.txt | \
   perl -pe 's/.* (\d{4}) \d{2}:\d{2}:\d{2}.*/$1/' | uniq -c | tail -n6
(yes, both could be done with fewer commands)
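For instance, the grep can be folded into the perl; a sketch of the scoreboard variant under that assumption:

perl -ne 'if (s/.*ftpmaster:\s+//) { s/\]$//; print }' removals-full.txt \
   | sort | uniq -c | sort --numeric --reverse | head -n10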
Filed under: Debian

24 April 2016

Bits from Debian: Debian welcomes its 2016 summer interns

We're excited to announce that Debian has selected 29 interns to work with us this summer: 4 in Outreachy, and 25 in the Google Summer of Code. Here is the list of projects the interns will work on:
Android SDK tools in Debian
APT - dpkg communications rework
Continuous Integration for Debian-Med packages
Extending the Debian Developer Horizon
Improving and extending AppRecommender
Improving the debsources frontend
Improving voice, video and chat communication with Free Software
MIPS and MIPSEL ports improvements
Reproducible Builds for Debian and Free Software
Support for KLEE in Debile
The Google Summer of Code and Outreachy programs are possible in Debian thanks to the effort of Debian developers and contributors who dedicate part of their free time to mentoring students and to outreach tasks. Join us and help extend Debian! You can follow the students' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or on each project's team mailing list. Congratulations to all of them!
